
Personalized Automatic Estimation of Self-reported Pain Intensity from Facial Expressions


Abstract

Pain is a personal, subjective experience that is commonly evaluated through visual analog scales (VAS). While this is often convenient and useful, automatic pain detection systems can reduce pain score acquisition efforts in large-scale studies by estimating it directly from the participants' facial expressions. In this paper, we propose a novel two-stage learning approach for VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs) to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels from face images. The estimated scores are then fed into the personalized Hidden Conditional Random Fields (HCRFs), used to estimate the VAS, provided by each person. Personalization of the model is performed using a newly introduced facial expressiveness score, unique for each person. To the best of our knowledge, this is the first approach to automatically estimate VAS from face images. We show the benefits of the proposed personalized approach over a traditional non-personalized approach on a benchmark dataset for pain analysis from face images.
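To make the two-stage pipeline concrete, below is a minimal sketch in PyTorch. It is an illustration only, not the authors' implementation: the feature dimensions and module names are assumptions, the second stage substitutes a small MLP over PSPI sequence statistics for the paper's personalized HCRF, and the per-person facial expressiveness score is treated as a given scalar placeholder.

```python
# Minimal sketch of the two-stage idea from the abstract (illustrative, not the paper's model).
# Stage 1: an RNN maps per-frame face features to frame-level PSPI estimates.
# Stage 2: a simple personalized regressor (stand-in for the paper's HCRF) maps the
#          estimated PSPI sequence plus a per-person expressiveness score to a VAS value.

import torch
import torch.nn as nn


class FramePSPIEstimator(nn.Module):
    """Stage 1: RNN regressing frame-level PSPI from per-frame face features.

    PSPI is conventionally defined from facial action unit (AU) intensities as
    AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43 (range 0-16); here the network
    regresses that value directly from the frame features.
    """

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, feat_dim) -> PSPI estimates: (batch, seq_len)
        h, _ = self.rnn(frames)
        return self.head(h).squeeze(-1)


class PersonalizedVASRegressor(nn.Module):
    """Stage 2 stand-in: predicts the sequence-level, self-reported VAS from summary
    statistics of the estimated PSPI sequence and a per-person expressiveness score
    (assumed here to be a precomputed scalar for each subject)."""

    def __init__(self, hidden_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, pspi_seq: torch.Tensor, expressiveness: torch.Tensor) -> torch.Tensor:
        # Summarize the PSPI sequence (mean, max, std) and append the expressiveness score.
        stats = torch.stack(
            [pspi_seq.mean(dim=1), pspi_seq.amax(dim=1), pspi_seq.std(dim=1)], dim=1
        )
        x = torch.cat([stats, expressiveness.unsqueeze(1)], dim=1)
        return self.mlp(x).squeeze(-1)


if __name__ == "__main__":
    batch, seq_len, feat_dim = 2, 50, 128
    frames = torch.randn(batch, seq_len, feat_dim)   # per-frame face features (placeholder)
    expressiveness = torch.rand(batch)               # per-person expressiveness score (placeholder)

    stage1 = FramePSPIEstimator(feat_dim)
    stage2 = PersonalizedVASRegressor()

    pspi = stage1(frames)                            # (batch, seq_len) frame-level PSPI
    vas = stage2(pspi, expressiveness)               # (batch,) sequence-level VAS
    print(pspi.shape, vas.shape)
```

The sketch only illustrates the data flow: frame-level PSPI is estimated first, and the resulting sequence, together with the person-specific expressiveness score, is reduced to a single sequence-level VAS prediction.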
